237 research outputs found

    An Evidence of Link between Default and Loss of Bank Loans from the Modeling of Competing Risks

    In this paper, we propose a method for comparing the relationship between the risks that lead a customer to default and the debt collection process that may allow the defaulter to recover. By estimating the competing risks that lead to the realization of the event of interest, we show that there is a significant relationship between the intensity of default and the losses from defaulted loans in the collection process. To this end, we investigate a competing risks model applied to the whole credit risk cycle of a bank loan portfolio. We estimate the competing causes related to the occurrence of default and then compare them with the estimated competing causes that lead loans to write-off. To model the competing risks, we use a Poisson distribution for the number of competing causes and a Weibull distribution for the failure times. Maximum likelihood estimation is used to estimate the parameters, and the model is applied to real data on personal loans.
    Comment: 8 pages
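    The Poisson-Weibull formulation above can be sketched in a short simulation: each loan draws a Poisson number M of latent competing causes, each cause a Weibull failure time, and the observed time is the minimum (loans with M = 0 never default, giving a cure fraction exp(-theta)). All parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (not from the paper): theta is the mean number of
# competing causes, (k, lam) are the Weibull shape and scale of cause times.
theta, k, lam = 1.5, 2.0, 3.0
n = 100_000

m = rng.poisson(theta, size=n)      # latent number of competing causes per loan
times = np.full(n, np.inf)          # M = 0 -> cured: the event never happens
has_cause = m > 0
# the observed time is the minimum of the M latent Weibull failure times
times[has_cause] = [lam * rng.weibull(k, size=mi).min() for mi in m[has_cause]]

# Population survival implied by the model: S(t) = exp(-theta * F(t)),
# where F is the Weibull CDF; the cure fraction is exp(-theta).
t0 = 2.0
F_t0 = 1.0 - np.exp(-((t0 / lam) ** k))
theoretical = np.exp(-theta * F_t0)
empirical = np.mean(times > t0)
print(round(theoretical, 3), round(empirical, 3))
```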

    Bayesian model averaging: A systematic review and conceptual classification

    Bayesian Model Averaging (BMA) is an application of Bayesian inference to the problems of model selection, combined estimation, and prediction that produces a straightforward model choice criterion and less risky predictions. However, the application of BMA is not always straightforward, leading to diverse assumptions and situational choices regarding its different aspects. Despite the widespread application of BMA in the literature, there have been few accounts of these differences and trends beyond a few landmark reviews in the late 1990s and early 2000s, which therefore do not take into account any advances made in the last 15 years. In this work, we present an account of these developments through a careful content analysis of 587 articles on BMA published between 1996 and 2014. We also develop a conceptual classification scheme to better describe this vast literature, understand its trends and future directions, and provide guidance for researchers interested in both the application and the development of the methodology. The results of the classification scheme and content review are then used to discuss the present and future of the BMA literature.
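    The BMA weighting idea can be illustrated with the common BIC approximation to posterior model probabilities, w_m proportional to exp(-BIC_m / 2); the two candidate linear models and the synthetic data below are hypothetical, not from any of the reviewed articles.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)          # irrelevant regressor
y = 1.0 + 2.0 * x1 + rng.normal(scale=0.5, size=n)

def bic(X, y):
    """BIC of a Gaussian linear model fit by least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1           # coefficients plus the noise variance
    return -2 * loglik + k * np.log(n)

ones = np.ones(n)
models = {
    "x1":    np.column_stack([ones, x1]),
    "x1+x2": np.column_stack([ones, x1, x2]),
}
bics = {name: bic(X, y) for name, X in models.items()}
# BIC approximation to posterior model probabilities: w_m ~ exp(-BIC_m / 2)
raw = {name: np.exp(-(b - min(bics.values())) / 2) for name, b in bics.items()}
z = sum(raw.values())
weights = {name: w / z for name, w in raw.items()}
print(weights)
```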

    Feature Selection Approach with Missing Values Conducted for Statistical Learning: A Case Study of Entrepreneurship Survival Dataset

    In this article, we investigate the features that best discriminate survival of micro and small enterprises (MSEs) using a data mining approach with feature selection. Given the complexity of the data set, we compare three data imputation methods, namely mean imputation (MI), k-nearest neighbors (KNN), and expectation maximization (EM), combined with a variable selection technique based on the t-test, and then run the data mining process using logistic regression, the naive Bayes algorithm, linear discriminant analysis, and support vector machines, comparing their respective performances. The experimental results are used to develop a model to predict MSE survival, providing a better understanding of the topic, since MSEs account for a significant part of the Brazilian GDP and macroeconomy.
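    A minimal sketch of the imputation comparison, using scikit-learn's mean and KNN imputers on synthetic data (the EM step of the paper is omitted here, and the data and missingness pattern are invented for illustration):

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(1)
X_full = rng.normal(size=(200, 4))
# make two columns correlated so KNN has structure to exploit
X_full[:, 1] = 0.9 * X_full[:, 0] + rng.normal(scale=0.1, size=200)

X = X_full.copy()
mask = rng.random(X.shape) < 0.1          # knock out ~10% of entries
X[mask] = np.nan

X_mean = SimpleImputer(strategy="mean").fit_transform(X)
X_knn = KNNImputer(n_neighbors=5).fit_transform(X)

# RMSE of the imputed entries against the held-out true values
rmse = lambda Xi: np.sqrt(np.mean((Xi[mask] - X_full[mask]) ** 2))
print("mean imputation RMSE:", round(rmse(X_mean), 3))
print("KNN imputation RMSE: ", round(rmse(X_knn), 3))
```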

    Classification methods applied to credit scoring: A systematic review and overall comparison

    The need to control and effectively manage credit risk has led financial institutions to excel at improving the techniques designed for this purpose, resulting in the development of various quantitative models by financial institutions and consulting companies. Accordingly, the growing number of academic studies on credit scoring shows a variety of classification methods applied to discriminate between good and bad borrowers. This paper therefore presents a systematic literature review relating the theory and application of binary classification techniques for credit scoring in financial analysis. The general results show the use and importance of the main techniques for credit rating, as well as some of the scientific paradigm shifts over the years.
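    As a toy illustration of the kind of binary classifier the review covers, a logistic regression scorecard can be fitted to synthetic borrower features; the feature names, coefficients, and data below are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 1000
income = rng.normal(size=n)
debt_ratio = rng.normal(size=n)
# synthetic "good/bad borrower" label: higher income, lower debt -> good
logit = 1.2 * income - 1.5 * debt_ratio
p_good = 1.0 / (1.0 + np.exp(-logit))
y = (rng.random(n) < p_good).astype(int)

X = np.column_stack([income, debt_ratio])
clf = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print("in-sample AUC:", round(auc, 3))
```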

    The Long Term Fréchet distribution: Estimation, Properties and its Application

    In this paper, a new long-term survival distribution is proposed. The so-called long-term Fréchet distribution allows us to fit data in which part of the population is not susceptible to the event of interest. This model may be used, for example, in clinical studies where a portion of the population can be cured during a treatment. We present an account of mathematical properties of the new distribution, such as its moments and survival properties, as well as the maximum likelihood estimators (MLEs) of the parameters. A numerical simulation is carried out in order to verify the performance of the MLEs. Finally, an important application related to leukemia-free survival times of transplant patients is discussed to illustrate the proposed distribution.
    Comment: 13 pages, 2 figures, 7 tables
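    The long-term survival function has the standard cure-fraction form S_pop(t) = p + (1 - p) S(t), where S(t) = 1 - exp(-(sigma/t)^alpha) is the Fréchet survival, so S_pop(t) tends to the cure fraction p as t grows. A sketch with hypothetical parameter values:

```python
import numpy as np

def frechet_sf(t, alpha, sigma):
    """Fréchet survival: 1 - exp(-(sigma / t)^alpha)."""
    return 1.0 - np.exp(-((sigma / t) ** alpha))

def long_term_sf(t, p, alpha, sigma):
    """Long-term (cure-fraction) survival: p + (1 - p) * S_Frechet(t)."""
    return p + (1.0 - p) * frechet_sf(t, alpha, sigma)

# Hypothetical parameter values for illustration only.
p, alpha, sigma = 0.3, 2.0, 1.0
t = np.array([0.1, 1.0, 10.0, 1e6])
s = long_term_sf(t, p, alpha, sigma)
print(np.round(s, 4))
```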

    BayesDccGarch - An Implementation of Multivariate GARCH DCC Models

    Multivariate GARCH models are important tools for describing the dynamics of multivariate time series of financial returns. Nevertheless, these models have been much less used in practice due to the lack of reliable software. This paper describes the R package BayesDccGarch, which was developed to implement recently proposed inference procedures for estimating and comparing multivariate GARCH models, allowing for asymmetric and heavy-tailed distributions.
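    The package itself is in R; as a language-neutral sketch, the DCC(1,1) correlation recursion that such models estimate, Q_t = (1 - a - b) S + a e_{t-1} e_{t-1}' + b Q_{t-1} with R_t the correlation matrix rescaled from Q_t, can be written as follows (the residuals and the values of a and b are illustrative):

```python
import numpy as np

def dcc_correlations(eps, a, b):
    """DCC(1,1) recursion on standardized residuals eps (T x k):
    Q_t = (1 - a - b) * S + a * e_{t-1} e_{t-1}' + b * Q_{t-1},
    R_t = diag(Q_t)^(-1/2) Q_t diag(Q_t)^(-1/2)."""
    T, k = eps.shape
    S = np.cov(eps, rowvar=False)   # unconditional covariance target matrix S
    Q = S.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        if t > 0:
            e = eps[t - 1][:, None]
            Q = (1 - a - b) * S + a * (e @ e.T) + b * Q
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)   # rescale Q_t to unit diagonal
    return R

rng = np.random.default_rng(3)
eps = rng.normal(size=(500, 2))     # toy standardized residuals
R = dcc_correlations(eps, a=0.05, b=0.90)
print(np.round(R[-1], 3))
```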

    Maximum Likelihood Estimation for the Weight Lindley Distribution Parameters under Different Types of Censoring

    In this paper, the maximum likelihood equations for the parameters of the weight Lindley distribution are studied under different types of censoring, such as type I, type II, and random censoring mechanisms. A numerical simulation study is performed to evaluate the maximum likelihood estimates. The proposed methodology is illustrated on a real data set.
    Comment: 19 pages
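    A sketch of the type-I-censored likelihood, assuming the weighted Lindley density f(x) = theta^(c+1) x^(c-1) (1 + x) e^(-theta x) / ((theta + c) Gamma(c)): uncensored observations contribute f(x), censored ones contribute the survival at the censoring time. Sampling uses the gamma-mixture representation of this density; all parameter values and the censoring time are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, gammaincc

rng = np.random.default_rng(10)
theta_true, c_true, n, t_cens = 1.0, 2.0, 2000, 4.0

# Simulate via the gamma-mixture representation of the weighted Lindley:
# Gamma(c, theta) w.p. theta/(theta+c), Gamma(c+1, theta) w.p. c/(theta+c).
shape = np.where(rng.random(n) < theta_true / (theta_true + c_true),
                 c_true, c_true + 1.0)
x = rng.gamma(shape, 1.0 / theta_true)

obs = x <= t_cens                  # Type-I censoring at t_cens
t = np.minimum(x, t_cens)

def log_sf(u, theta, c):
    # S(u) = Q(c, theta*u) + (theta*u)^c e^(-theta*u) / ((theta+c) Gamma(c)),
    # with Q the regularized upper incomplete gamma function.
    z = theta * u
    return np.log(gammaincc(c, z)
                  + np.exp(c * np.log(z) - z - gammaln(c)) / (theta + c))

def neg_loglik(params):
    theta, c = params
    if theta <= 0 or c <= 0:
        return np.inf
    z = theta * t[obs]
    log_f = ((c + 1) * np.log(theta) - np.log(theta + c) - gammaln(c)
             + (c - 1) * np.log(t[obs]) + np.log1p(t[obs]) - z)
    return -(log_f.sum() + log_sf(t_cens, theta, c) * (~obs).sum())

res = minimize(neg_loglik, x0=[0.5, 1.0], method="Nelder-Mead")
print("theta_hat, c_hat:", np.round(res.x, 2))
```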

    The Fréchet distribution: Estimation and Application - An Overview

    In this article, we consider the problem of estimating the parameters of the Fréchet distribution from both frequentist and Bayesian points of view. First, we briefly describe different frequentist approaches, namely maximum likelihood, method of moments, percentile estimators, L-moments, ordinary and weighted least squares, maximum product of spacings, and maximum goodness-of-fit estimators, and compare them with respect to mean relative estimates, mean squared errors, and the 95% coverage probability of the asymptotic confidence intervals using extensive numerical simulations. Next, we consider the Bayesian inference approach using reference priors. The Metropolis-Hastings algorithm is used to draw Markov chain Monte Carlo samples, which are in turn used to compute the Bayes estimates and to construct the corresponding credible intervals. Five real data sets related to the minimum flow of water on the Piracicaba River in Brazil are used to illustrate the applicability of the discussed procedures.
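    As a sketch of the maximum likelihood approach, SciPy's invweibull distribution (the Fréchet distribution under another name, with shape alpha and scale sigma) can be fitted to simulated Fréchet data generated by inverse transform; the parameter values below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
alpha_true, sigma_true = 3.0, 2.0

# Fréchet CDF is F(x) = exp(-(x/sigma)^(-alpha)), so inverse transform
# sampling gives X = sigma * (-log U)^(-1/alpha) for U ~ Uniform(0, 1).
u = rng.random(5000)
x = sigma_true * (-np.log(u)) ** (-1.0 / alpha_true)

# Maximum likelihood via scipy's invweibull with location fixed at zero;
# fit returns (shape, loc, scale).
alpha_hat, _, sigma_hat = stats.invweibull.fit(x, floc=0)
print(round(alpha_hat, 2), round(sigma_hat, 2))
```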

    Analyzing Volleyball Data on a Compositional Regression Model Approach: An Application to the Brazilian Men's Volleyball Super League 2011/2012 Data

    Volleyball has become a competitive sport demanding high physical and technical performance. Match results are based on the players' and teams' skills, as well as on the technical and tactical strategies used to succeed in a championship. In this context, studies have been carried out on the performance analysis of different match elements, contributing to the development of the sport. In this paper, we propose a new approach to analyzing volleyball data. The study is based on compositional data methodology within a regression model, with parameters estimated by maximum likelihood. We perform a simulation study to evaluate the estimation procedure in the compositional regression model, and we illustrate the proposed methodology on a real volleyball data set.
    Comment: 12 pages
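    A common way to regress on compositions, used here as a sketch of the general idea (not necessarily the paper's exact model), is to map each composition to additive log-ratio (alr) coordinates and fit least squares coordinate-wise; the 3-part composition and covariate below are synthetic.

```python
import numpy as np

def alr(p):
    """Additive log-ratio transform of compositions (rows sum to 1),
    using the last component as the reference."""
    return np.log(p[:, :-1] / p[:, -1:])

def alr_inv(z):
    """Map alr coordinates back to the simplex."""
    e = np.hstack([np.exp(z), np.ones((z.shape[0], 1))])
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
# hypothetical 3-part composition (e.g. shares of point types) driven by x
eta = np.column_stack([0.5 + 1.0 * x, -0.2 + 0.5 * x])   # true alr coordinates
p = alr_inv(eta + rng.normal(scale=0.1, size=eta.shape))

# regress each alr coordinate on x by least squares
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, alr(p), rcond=None)
print(np.round(beta, 2))
```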

    The zero-inflated promotion cure rate regression model applied to fraud propensity in bank loan applications

    In this paper, we extend the promotion cure rate model proposed by Chen et al (1999) by incorporating an excess of zeros in the modelling. Although the current approach, which is based on a biological interpretation of the causes that trigger the event of interest, allows the covariates to be related to the cure fraction, it does not allow them to be related to the fraction of zeros. The presence of zeros in survival data, unusual in medical studies, can frequently occur in banking loan portfolios, as presented in Louzada et al (2015), where the authors deal with the propensity to fraud in loan applications at a major Brazilian bank. To illustrate the new cure rate survival method, the same real dataset analyzed in Louzada et al (2015) is fitted here, and the results are compared.
    Comment: 13 pages, 2 figures, 6 tables. arXiv admin note: text overlap with arXiv:1509.0524
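    Writing S(t) = exp(-theta F(t)) for the promotion cure model, a zero-inflated version can place a point mass pi0 at t = 0 (e.g. instant fraud), so the population survival becomes (1 - pi0) exp(-theta F(t)); whether this matches the paper's exact parametrization is an assumption, and the latent CDF and parameter values below are illustrative.

```python
import numpy as np

def zi_promotion_sf(t, pi0, theta, F):
    """Population survival of a zero-inflated promotion cure model:
    mass pi0 at t = 0, otherwise S(t) = exp(-theta * F(t)), whose
    long-run limit (1 - pi0) * exp(-theta) is the cured-and-nonzero fraction."""
    return (1.0 - pi0) * np.exp(-theta * F(t))

# Hypothetical choices: exponential latent CDF F with rate 1.
F = lambda t: 1.0 - np.exp(-t)
pi0, theta = 0.1, 1.2
t = np.array([0.0, 0.5, 2.0, 100.0])
s = zi_promotion_sf(t, pi0, theta, F)
print(np.round(s, 4))
```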